
    Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey

    Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make timely, targeted treatment decisions, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have driven significant improvements in the accuracy of lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully applying convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies on lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, several challenges emerged, mainly the lack of publicly accessible annotated data, together with the lack of reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies on lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
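    To make the slice-based/scan-based split concrete, the sketch below contrasts a minimal 2D CNN fed with planar CT slices and a minimal 3D CNN fed with volumetric CT patches. It is an illustration only: the input shapes, filter counts, and the use of Keras are assumptions, not taken from any surveyed study.

```python
# Minimal sketch (not from the survey): a slice-based 2D CNN versus a
# scan-based 3D CNN for binary nodule classification. Input shapes and
# layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def slice_based_cnn(slice_shape=(128, 128, 1)):
    # Classifies a single planar CT slice.
    return models.Sequential([
        layers.Input(shape=slice_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # P(malignant)
    ])

def scan_based_cnn(volume_shape=(32, 128, 128, 1)):
    # Classifies a volumetric CT patch, preserving inter-slice context,
    # at the cost of more parameters and scarcer annotated training data.
    return models.Sequential([
        layers.Input(shape=volume_shape),
        layers.Conv3D(16, 3, activation="relu"),
        layers.MaxPooling3D(),
        layers.Conv3D(32, 3, activation="relu"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(1, activation="sigmoid"),
    ])
```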

    Deep learning for automatic violence detection: tests on the AIRTLab dataset

    Following the growing availability of video surveillance cameras and the need for techniques to automatically identify events in video footage, there is an increasing interest towards automatic violence detection in videos. Deep learning-based architectures, such as 3D Convolutional Neural Networks (3D CNNs), demonstrated their capability of extracting spatio-temporal features from videos, being effective in violence detection. However, friendly behaviours or fast moves such as hugs, small hits, claps, high fives, etc., can still cause false positives, interpreting a harmless action as violent. To this end, we present three deep learning-based models for violence detection and test them on the AIRTLab dataset, a novel dataset designed to check the robustness of algorithms against false positives. The objective is twofold: on one hand, we compute accuracy metrics on the three proposed models (two are based on transfer learning and one is trained from scratch), building a baseline of metrics for the AIRTLab dataset; on the other hand, we validate the capability of the proposed dataset to challenge the robustness to false positives. The results of the proposed models are in line with the scientific literature, in terms of accuracy, with transfer learning-based networks exhibiting better generalization capabilities than the network trained from scratch. Moreover, the tests highlighted that most of the classification errors concern the identification of non-violent clips, validating the design of the proposed dataset. Finally, to demonstrate the significance of the proposed models, the paper presents a comparison with the related literature, as well as with models based on well-established pre-trained 2D Convolutional Neural Networks (2D CNNs). Such comparison highlights that 3D models achieve better accuracy than time-distributed 2D CNNs (merged with a recurrent model) in processing the spatio-temporal features of video clips. The source code of the experiments and the AIRTLab dataset are available in public repositories.
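    The sketch below illustrates the two model families the comparison refers to: a 3D CNN over whole clips versus a time-distributed 2D CNN whose per-frame features are merged by a recurrent layer. These are not the paper's exact models; the clip shape, layer sizes, and Keras layers are assumptions for illustration.

```python
# Minimal sketch of the two compared families for binary violence detection.
import tensorflow as tf
from tensorflow.keras import layers, models

CLIP_SHAPE = (16, 112, 112, 3)  # (frames, height, width, RGB), assumed

def cnn3d_classifier():
    # 3D convolutions learn spatial and temporal features jointly.
    return models.Sequential([
        layers.Input(shape=CLIP_SHAPE),
        layers.Conv3D(32, 3, activation="relu"),
        layers.MaxPooling3D(pool_size=(1, 2, 2)),
        layers.Conv3D(64, 3, activation="relu"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(1, activation="sigmoid"),  # P(violent)
    ])

def time_distributed_2d_lstm():
    # A 2D CNN applied frame by frame; an LSTM then models the sequence
    # of per-frame feature vectors.
    return models.Sequential([
        layers.Input(shape=CLIP_SHAPE),
        layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
        layers.TimeDistributed(layers.GlobalAveragePooling2D()),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])
```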

    An Internet of Things Approach to Contact Tracing—The BubbleBox System

    The COVID-19 pandemic exploded at the beginning of 2020, with over four million cases in five months, overwhelming the healthcare sector. Several national governments decided to adopt containment measures, such as lockdowns, social distancing, and quarantine. Among these measures, contact tracing can contribute to bringing the outbreak under control, as quickly identifying contacts to isolate suspected cases can limit the number of infected people. In this paper we present BubbleBox, a system relying on a dedicated device to perform contact tracing. BubbleBox integrates Internet of Things and software technologies into different components to achieve its goal—providing a tool to quickly react to further outbreaks, by allowing health operators to rapidly reach and test possibly infected people. This paper describes the BubbleBox architecture, presents its prototype implementation, and discusses its pros and cons, also dealing with privacy concerns.
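    The abstract does not describe BubbleBox's actual interfaces, so the following is a purely hypothetical sketch of a contact-tracing core: devices log proximity events, and a health operator queries the recent contacts of a positive case. All names and the in-memory design are invented for illustration.

```python
# Hypothetical contact-tracing core (not BubbleBox's real API).
from collections import defaultdict
from datetime import datetime, timedelta

class ContactLog:
    def __init__(self):
        self._events = defaultdict(list)  # device id -> [(other id, time)]

    def record_proximity(self, device_a: str, device_b: str, when: datetime):
        # Each proximity detection is stored symmetrically for both devices.
        self._events[device_a].append((device_b, when))
        self._events[device_b].append((device_a, when))

    def contacts_of(self, device: str, since_days: int = 14) -> set:
        # Devices seen near `device` within the lookback window, i.e. the
        # people to reach and test after a positive result.
        cutoff = datetime.utcnow() - timedelta(days=since_days)
        return {other for other, when in self._events[device] if when >= cutoff}

log = ContactLog()
log.record_proximity("box-17", "box-42", datetime.utcnow())
print(log.contacts_of("box-17"))  # {'box-42'}
```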

    On-cloud decision-support system for non-small cell lung cancer histology characterization from thorax computed tomography scans

    Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians to make targeted therapeutic treatments but also prevent subjects from undergoing lung biopsy, which is challenging and could lead to clinical complications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. This aim was pursued by selecting thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD), in addition to the implementation and comparison of two end-to-end neural networks (the core layer of which is a convolutional long short-term memory layer), the performance evaluation on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade, and the dynamic visual interpretation of the achieved results by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically-useful decision-support system able to timely, non-invasively and reliably provide visually-understandable predictions on LUAD and LUSC subjects in relation to clinically-relevant information.
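    The only architectural detail the abstract gives is the convolutional long short-term memory core. The sketch below shows that idea in minimal form: a ConvLSTM scans the slice sequence of a CT volume, keeping spatial structure while modeling inter-slice dependencies. The scan shape, surrounding layers, and training setup are assumptions, not LUCY's actual design.

```python
# Minimal sketch of a ConvLSTM-cored LUAD-vs-LUSC classifier (assumed shapes).
import tensorflow as tf
from tensorflow.keras import layers, models

def convlstm_histology_classifier(scan_shape=(64, 128, 128, 1)):
    # scan_shape: (slices, height, width, channels), assumed.
    return models.Sequential([
        layers.Input(shape=scan_shape),
        # ConvLSTM core: convolutional gates walk the slice sequence, so the
        # recurrence is over slices while features stay spatially structured.
        layers.ConvLSTM2D(16, 3, padding="same", return_sequences=False),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # LUAD vs. LUSC
    ])

model = convlstm_histology_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```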

    Belief Revision: From Theory To Practice

    Belief revision is the process of rearranging a knowledge base to preserve global consistency while accommodating incoming information. Early approaches to belief revision were symbolic and model-theoretic, considering the problem as one of changing a logical theory. More recent approaches have adopted qualitative syntactic methods, taking them into the area of "truth maintenance systems", and numerical mathematical methods, thus moving into the mainstream literature of uncertainty management. Multi-agent systems, in which information may come from a variety of human or artificial sources with different degrees of reliability, seem to be a natural domain for belief revision. The aim of this paper is to give a synoptic perspective of this composite subject, from the clear air of the high theoretical peaks down to the muddy plain of practical algorithms.

    A Model For Belief Revision In A Multi-Agent Environment

    Introduction. By "Belief Revision" we mean the process of detecting contradictions, identifying the assumptions from which they came out, and readjusting the knowledge base to remove the contradictions. Beliefs are assumed to be expressed as sentences of first-order logic stored in the agent's memory. Belief sets are not assumed to be closed under logical consequence as in [Gardenfors 90] (i.e. if K is a belief set then, generally, K ≠ Th(K), where Th(K) denotes all the sentences derivable by a complete first-order theorem prover from K). There are two kinds of sentences: those actually introduced as assumptions and those deductively derived as logical consequences of the assumptions (respectively evidences and conclusions in [Post 90]). We need an Assumption-Based Truth Maintenance System [De Kleer 86] and we adopt the following modified version of the Supported Wff of Mart
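    As a toy illustration of the ATMS-style bookkeeping the abstract refers to, the sketch below tags every derived sentence with the set of assumptions it rests on, so that when a contradiction surfaces the culprit assumptions can be identified and one retracted. Propositional string labels stand in for first-order formulas; the class and its methods are invented for illustration.

```python
# Toy assumption-tracking belief base (ATMS flavour, not the paper's system).
class BeliefBase:
    def __init__(self):
        self.support = {}  # sentence -> frozenset of assumptions it rests on

    def assume(self, sentence: str):
        self.support[sentence] = frozenset({sentence})

    def derive(self, sentence: str, premises: list):
        # A conclusion inherits the union of its premises' assumptions.
        deps = frozenset().union(*(self.support[p] for p in premises))
        self.support[sentence] = deps

    def nogood(self, s1: str, s2: str) -> frozenset:
        # Assumptions jointly responsible for the contradiction s1 vs s2.
        return self.support[s1] | self.support[s2]

kb = BeliefBase()
kb.assume("bird(tweety)")
kb.assume("penguin(tweety)")
kb.derive("flies(tweety)", ["bird(tweety)"])
kb.derive("~flies(tweety)", ["penguin(tweety)"])
print(kb.nogood("flies(tweety)", "~flies(tweety)"))
# frozenset({'bird(tweety)', 'penguin(tweety)'}): retract one to restore consistency
```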

    Fondamenti di Programmazione in C++. Algoritmi, strutture dati ed oggetti.

    Editing and translation of the Italian edition of Aguilar's book on the fundamentals of programming in C++, with 540 corrections and amendments made to the original text.

    Belief Revision under Uncertainty in a Multi Agent Environment

    Introduction. The body of beliefs (facts and rules) accumulated in the course of time by a knowledge-based system interacting with a complex and dynamic world is destined to evolve. Some of the incoming pieces of information integrate and corroborate the previously held corpus of sentences about the world, but others might cause serious conflicts with the established knowledge. In this case, the eventual acquisition of the new evidence should be accompanied by a partial or total reduction of the credibility of the conflicting pieces of knowledge. If the system's collection of beliefs is not a flat set of facts but contains rules, finding such conflicts and determining all the sentences involved in the contradictions can be hard because knowledge is only partially explicit. Since the seminal, influential and philosophical work of Alchourrón, Gärdenfors and Makinson [1], the ideas on "belief revision" have been progressively refined [2,3] and ameliorated toward normative, effective and
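    The following sketch illustrates, in the simplest possible terms, the credibility-reduction idea described above: beliefs carry a numeric credibility inherited from source reliability, and when new evidence conflicts with a stored belief, the weaker side is penalized rather than discarded outright. This is an invented toy, not the paper's formalism; the class, the penalty factor, and the string-based negation convention are all assumptions.

```python
# Illustrative credibility-weighted belief base (not the paper's model).
class WeightedBeliefBase:
    def __init__(self, penalty=0.5):
        self.cred = {}          # sentence -> credibility in [0, 1]
        self.penalty = penalty  # factor applied to the losing side of a conflict

    def tell(self, sentence: str, credibility: float):
        # "~p" is taken as the negation of "p" (toy convention).
        negation = sentence[1:] if sentence.startswith("~") else "~" + sentence
        if negation in self.cred:
            # Conflict: partially reduce the credibility of the weaker side.
            if credibility >= self.cred[negation]:
                self.cred[negation] *= self.penalty
            else:
                credibility *= self.penalty
        self.cred[sentence] = credibility

kb = WeightedBeliefBase()
kb.tell("runway_free", 0.6)   # e.g. from a low-reliability sensor
kb.tell("~runway_free", 0.9)  # e.g. from a trusted human operator
print(kb.cred)  # {'runway_free': 0.3, '~runway_free': 0.9}
```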